Bug 1368211 - RHEL7: device-mapper-multipath fails when removing more than one device.
Summary: RHEL7: device-mapper-multipath fails when removing more than one device.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: device-mapper-multipath
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ben Marzinski
QA Contact: Lin Li
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On: 1368191
Blocks:
 
Reported: 2016-08-18 17:04 UTC by Rodrigo A B Freire
Modified: 2022-03-13 14:05 UTC
CC List: 21 users

Fixed In Version: device-mapper-multipath-0.4.9-100.el7
Doc Type: Enhancement
Doc Text:
New "remove retries" multipath configuration value If a multipath device is temporarily in use when multipath tries to remove it, the remove will fail. It is now possible to control the number of times that the "multipath" command will retry removing a multipath device that is busy by setting the "remove_retries" configuration value. The default value is 0, in which case multipath will not retry failed removes.
Clone Of: 1368191
Environment:
Last Closed: 2017-08-01 16:34:26 UTC
Target Upstream Version:
Embargoed:


Attachments
retry check for opened device up to three times (2.07 KB, patch), 2016-08-25 18:43 UTC, Ben Marzinski
Updated retry check patch (2.43 KB, patch), 2016-08-29 15:14 UTC, Ben Marzinski
New versions of the retry patch (918 bytes, application/x-gzip), 2016-09-05 15:33 UTC, Ben Marzinski


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1592520 0 None None None 2016-08-18 17:04:06 UTC
Red Hat Knowledge Base (Solution) 2387621 0 None None None 2016-08-18 17:04:06 UTC
Red Hat Knowledge Base (Solution) 2490251 0 None None None 2016-08-18 17:04:06 UTC
Red Hat Product Errata RHBA-2017:1961 0 normal SHIPPED_LIVE device-mapper-multipath bug fix and enhancement update 2017-08-01 17:56:09 UTC

Description Rodrigo A B Freire 2016-08-18 17:04:07 UTC
+++ This bug was initially created as a clone of Bug #1368191 +++

Description of problem:
* Multipath sometimes has problems when detaching a multipath volume. This is a fairly common scenario and shows up in environments with 3 or more multipath devices.

Version-Release number of selected component (if applicable):
device-mapper-multipath-0.4.9-85.el7.x86_64

How reproducible:
* Easily

Steps to Reproduce:
1. Create a server with 3 or more multipath FC-attached devices
2. Repeatedly remove, scan, and add multipath LUNs
3. Run the following script

while true; do for MPATH in <WWID 1> <WWID 2> <WWID 3> ; do DEVICES=`multipath -l $MPATH | grep runnin | awk '{print  substr ($_,6,8)}' `; echo "Flushing: multipath -f $MPATH"; if ! multipath -f $MPATH; then exit 1 ; fi ; for DEVICE in $DEVICES; do echo "Deleting: echo 1 > /sys/bus/scsi/drivers/sd/$DEVICE/delete"; echo 1 > /sys/bus/scsi/drivers/sd/$DEVICE/delete ; done ; done ; LC=`multipath -ll|wc -l` ; multipath -ll ; if [ "$LC" != "0" ]; then exit 1; fi  ; sleep 10; rescan-scsi-bus.sh -i ; sleep 2; multipath -r ; done

Actual results:
* The script will eventually be interrupted with the following error:

Flushing: multipath -f 360060160c8e035007fcf98b1e85fe611
Aug 18 12:58:45 | 360060160c8e035007fcf98b1e85fe611: map in use
Aug 18 12:58:45 | failed to remove multipath map 360060160c8e035007fcf98b1e85fe611

Expected results:
* Since no processes are touching the multipath device, the flush should happen cleanly.


Additional info:
* The presence or absence of LVM volumes is not relevant to this problem.
* A single retry is usually sufficient after an exit error 1 (map in use); retrying will flush the lingering device.
* queue_if_no_path has no influence on this error; it happens with or without queue_if_no_path.
* If you try to remove the underlying devices that build the multipath device while queue_if_no_path is enabled, you will eventually end up with a systemd-udev worker stuck in D state.

As per Red Hat Documentation [1], the canonical way to remove a multipath device is:

1. Close all files
2. Unmount the device
3. Remove the LVM part (not relevant to this issue)
4. Run multipath -l to enumerate the underlying devices that are part of the multipath device
4.1 multipath -f <WWID>
5. Flush the devices (blockdev --flushbufs /dev/sd)
6. Remove any existing references to the /dev/sd devices
7. echo delete for each /dev/sd device

This bug's error happens in step 4.1.

--
[1] - https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/removing_devices.html
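A rough shell sketch of that procedure follows; the WWID, mount point, and sd device names are placeholders, not values from this bug:

# 1-2. Close all files and unmount the device
umount /mnt/example
# 3. Remove the LVM layer if present (not relevant here)
# 4. Enumerate the underlying devices that are part of the multipath device
multipath -l <WWID>
# 4.1 Flush the multipath map (the step that fails in this bug)
multipath -f <WWID>
# 5. Flush the buffers on each underlying path
blockdev --flushbufs /dev/sdX
# 6. Remove any remaining references to the /dev/sdX devices
# 7. Delete each underlying path
echo 1 > /sys/bus/scsi/drivers/sd/<H:C:T:L>/delete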

Comment 2 Rodrigo A B Freire 2016-08-19 14:01:56 UTC
Human-readable reproducer:

while true
  do for MPATH in <WWID 1> <WWID 2> <WWID 3> 
    do DEVICES=`multipath -l $MPATH | grep runnin | awk '{print  substr ($_,6,8)}' `
      echo "Flushing: multipath -f $MPATH" 
      multipath -f $MPATH 
      RETFIRST=$? 
# IF the first multipath -f fails, give it a second and try again
      if [ "$RETFIRST" != 0 ] 
        then echo "First flush failed Returned $RETFIRST  Trying again. Sleeping 1 second."
        logger "First flush failed Returned $RETFIRST  Trying again. Sleeping 1 second."
        sleep 1
        echo "multipath -f $MPATH" 
        multipath -f $MPATH 
        RETSECOND=$? 
# IF it fails the second time, throw an error and exit
          if [ "$RETSECOND" != 0 ] 
          then echo "Second flush failed, returned $RETSECOND."
          logger "Second flush failed, returned $RETSECOND."
          exit 1
        fi 
# Codepath for second-try flush.
        echo "Second fush success. Returned $RETSECOND"
        logger "Second fush success. Returned $RETSECOND"
      fi
      for DEVICE in $DEVICES
        do echo "Deleting: echo 1 > /sys/bus/scsi/drivers/sd/$DEVICE/delete"
        echo 1 > /sys/bus/scsi/drivers/sd/$DEVICE/delete 
      done 
    done 
  multipath -ll 
  sleep 10
  rescan-scsi-bus.sh -i 
  sleep 2
  multipath -r 
done

Comment 4 Ben Marzinski 2016-08-25 18:43:41 UTC
Created attachment 1194080 [details]
retry check for opened device up to three times.

Instead of failing immediately if dm says that the device is in use, multipath will check up to 3 times, with a 1-second sleep in between, before failing the remove.
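
For illustration only, the equivalent retry behavior expressed as a shell loop (the actual fix lives inside the multipath binary, not in a wrapper script; <WWID> is a placeholder):

WWID=<WWID>
for ATTEMPT in 1 2 3; do
    if multipath -f "$WWID"; then
        break                   # flush succeeded
    fi
    sleep 1                     # 1-second pause between checks, as in the patch
done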

Comment 6 Ben Marzinski 2016-08-29 15:14:49 UTC
Created attachment 1195413 [details]
Updated retry check patch

This version of the patch also rechecks the number of partitions on each retry, but more importantly, it releases the dm context after each call, since holding onto it was keeping us from getting an updated open count.

Comment 10 Ben Marzinski 2016-08-29 20:52:38 UTC
Like I mentioned on IRC, the problem with this patch is that it makes the common case (where removing a device fails because it is actually in use) slow. In fact, if someone had a large number of devices that were being used, running

# multipath -F

could take minutes. So, I'd like to make these retries optional, by adding another command option "-R". Adding "-R" to a command would make it retry in
cases where the device was in use. Does this sound reasonable?

Comment 11 Rodrigo A B Freire 2016-08-29 21:08:08 UTC
(In reply to Ben Marzinski from comment #10)
> Like I mentioned on IRC, the problem with this patch is that it makes the
> common case (where removing a device fails because it is actually in use)
> slow. In fact, if someone had a large number of devices that were being
> used, running
> 
> # multipath -F
> 
> could take minutes. So, I'd like to make these retries optional, by adding
> another command option "-R". Adding "-R" to a command would make it retry in
> cases where the device was in use. Does this sound reasonable?

Makes sense, sounds logical. I have no problems with it.

Comment 12 Ben Marzinski 2016-09-05 15:33:40 UTC
Created attachment 1197979 [details]
New versions of the retry patch

This tarball contains the RHEL-7.2 and RHEL-7.3 versions of this patch.  I have tested both on the machine that can recreate the issue, and both have run for a day without issues.  Since I've been able to understand and avoid the problems that the previous patches were having, I have a high degree of confidence that these will work, but you should still verify them yourself, Rodrigo.

Comment 13 Rodrigo A B Freire 2016-09-05 15:41:26 UTC
Comment on attachment 1197979 [details]
New versions of the retry patch

Unchecking the isPatch flag, so I can download it!

Comment 17 Ben Marzinski 2016-09-06 21:40:10 UTC
Controlling the number of remove retries will be done by setting "remove_retries" in the defaults section of /etc/multipath.conf. It will default to zero, which is the current behavior. This bug is too late to make 7.3, but I'm fine with releasing the fix as a zstream.
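
For reference, the resulting configuration would look something like this (the retry count here is illustrative):

defaults {
        remove_retries 3
}

# service multipathd reload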

Comment 19 Rodrigo A B Freire 2016-11-10 09:43:14 UTC
Hi Benjamin!

Do you have the upstream post, so I can use it as the binding point for an OpenStack change request?

Thanks!!

Comment 20 Ben Marzinski 2016-11-21 15:37:43 UTC
Here's the upstream thread

https://www.redhat.com/archives/dm-devel/2016-November/msg00085.html

and here's the upstream commit

http://git.opensvc.com/gitweb.cgi?p=multipath-tools/.git;a=commit;h=4a2b3e75719f90e356408401d3c43210a0b2e111

But you should know that RHEL multipath is not going to sync with upstream again until the next major version of RHEL. There is enough churn going on right now that even Fedora isn't tracking it.

Comment 23 Steven J. Levine 2017-05-08 18:37:33 UTC
Ben:  Could you check over the way I summarized the doc text for the release notes for this feature?

Comment 24 Ben Marzinski 2017-05-08 19:07:55 UTC
Looks good.

Comment 28 Lin Li 2017-06-12 02:01:37 UTC
Verified on device-mapper-multipath-0.4.9-111.el7
[root@storageqe-06 ~]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-111.el7.x86_64
device-mapper-multipath-libs-0.4.9-111.el7.x86_64

# man multipath.conf
      remove_retries    This sets how many times multipath will retry removing
                        a device that is in-use. Between each attempt, multipath
                        will sleep 1 second. The default is 0.

# multipathd show config | grep remove_retries
	remove_retries 0          -------->The default value is 0


Edit /etc/multipath.conf to set remove_retries 3 
# cat /etc/multipath.conf
defaults {
	find_multipaths no
	user_friendly_names yes
        disable_changed_wwids yes
        remove_retries 3            <--------------
}

[root@storageqe-06 ~]# service multipathd reload
Redirecting to /bin/systemctl reload multipathd.service


[root@storageqe-06 ~]# fdisk -l

Disk /dev/sda: 146.8 GB, 146778685440 bytes, 286677120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000f0150

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048        4095        1024   83  Linux
/dev/sda2   *        4096     1028095      512000   83  Linux
/dev/sda3         1028096    17545215     8258560   82  Linux swap / Solaris
/dev/sda4        17545216   286676991   134565888    5  Extended
/dev/sda5        17547264   286676991   134564864   83  Linux

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x0003afef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     2099199     1048576   83  Linux
/dev/sdb2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdc: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x000b9755

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     4194303     2096128   8e  Linux LVM

Disk /dev/mapper/360a98000324669436c2b45666c56786d: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x0003afef

                                        Device Boot      Start         End      Blocks   Id  System
/dev/mapper/360a98000324669436c2b45666c56786d1   *        2048     2099199     1048576   83  Linux
/dev/mapper/360a98000324669436c2b45666c56786d2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdd: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sde: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdf: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdg: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x0003afef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1   *        2048     2099199     1048576   83  Linux
/dev/sdg2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdh: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x000b9755

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1            2048     4194303     2096128   8e  Linux LVM

Disk /dev/sdj: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdk: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdl: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x0003afef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdl1   *        2048     2099199     1048576   83  Linux
/dev/sdl2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdm: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x000b9755

   Device Boot      Start         End      Blocks   Id  System
/dev/sdm1            2048     4194303     2096128   8e  Linux LVM

Disk /dev/sdn: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdo: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdp: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdq: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x0003afef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdq1   *        2048     2099199     1048576   83  Linux
/dev/sdq2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdr: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0x000b9755

   Device Boot      Start         End      Blocks   Id  System
/dev/sdr1            2048     4194303     2096128   8e  Linux LVM

Disk /dev/sds: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdt: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/sdu: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/mapper/rhel_storageqe--06-root: 18.2 GB, 18249416704 bytes, 35643392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/mapper/rhel_storageqe--06-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/mapper/360a98000324669436c2b45666c567873: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes


Disk /dev/mapper/360a98000324669436c2b45666c567875: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes



 
[root@storageqe-06 ~]# mkfs.ext3 /dev/mapper/360a98000324669436c2b45666c567875
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=16 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 


[root@storageqe-06 ~]# mount /dev/mapper/360a98000324669436c2b45666c567875 /mnt
[root@storageqe-06 ~]# 
[root@storageqe-06 ~]# echo $?
0

[root@storageqe-06 ~]# multipath -f /dev/mapper/360a98000324669436c2b45666c567875
Jun 11 21:44:34 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:44:35 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:44:36 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:44:37 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:44:37 | failed to remove multipath map /dev/mapper/360a98000324669436c2b45666c567875
----------------------------->remove_retries set to 3, it retries 3 times





Edit /etc/multipath.conf to set remove_retries 6
[root@storageqe-06 ~]# cat /etc/multipath.conf
defaults {
	find_multipaths no
	user_friendly_names yes
        disable_changed_wwids yes
        remove_retries 6            <--------------
}

[root@storageqe-06 ~]# service multipathd reload
Redirecting to /bin/systemctl reload multipathd.service

[root@storageqe-06 ~]# multipath -f /dev/mapper/360a98000324669436c2b45666c567875
Jun 11 21:48:35 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:48:36 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:48:37 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:48:38 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:48:39 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:48:40 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:48:41 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:48:41 | failed to remove multipath map /dev/mapper/360a98000324669436c2b45666c567875
----------------------------->remove_retries set to 6, it will retry 6 times




Edit /etc/multipath.conf to set remove_retries 0
[root@storageqe-06 ~]# cat /etc/multipath.conf
defaults {
	find_multipaths no
	user_friendly_names yes
        disable_changed_wwids yes
        remove_retries 0            <--------------
}

[root@storageqe-06 ~]# service multipathd reload
Redirecting to /bin/systemctl reload multipathd.service

[root@storageqe-06 ~]# multipath -f /dev/mapper/360a98000324669436c2b45666c567875
Jun 11 21:51:34 | /dev/mapper/360a98000324669436c2b45666c567875: map in use
Jun 11 21:51:34 | failed to remove multipath map /dev/mapper/360a98000324669436c2b45666c567875
------------------------------>remove_retries set to 0, it simply fails without retrying





Edit /etc/multipath.conf to set remove_retries 8
[root@storageqe-06 ~]# cat /etc/multipath.conf
defaults {
	find_multipaths no
	user_friendly_names yes
        disable_changed_wwids yes
        remove_retries 8            <--------------
}

[root@storageqe-06 ~]# service multipathd reload
Redirecting to /bin/systemctl reload multipathd.service

[root@storageqe-06 ~]# umount /dev/mapper/360a98000324669436c2b45666c567875
[root@storageqe-06 ~]# echo $?
0

[root@storageqe-06 ~]# multipath -f /dev/mapper/360a98000324669436c2b45666c567875
[root@storageqe-06 ~]# echo $?
0
----------------------------------->once nothing is using the multipath device, the flush succeeds and the device is removed

Comment 29 errata-xmlrpc 2017-08-01 16:34:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1961

