Bug 1585581 - multipath device cannot be fully cleaned up after the iSCSI devices are logged out because the systemd-udevd process is still using the multipath device
Summary: multipath device cannot be fully cleaned up after the iSCSI devices are logged out because the systemd-udevd process is still using the multipath device
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: systemd
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: systemd-maint
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On:
Blocks: 1511010 1554642 1630908 1657156 1719445 1801675
 
Reported: 2018-06-04 07:12 UTC by Xiubo Li
Modified: 2020-11-11 21:56 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-11 21:56:14 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID: Red Hat Bugzilla 1511010
Private: 0
Priority: medium
Status: CLOSED
Summary: [GSS] [Tracking] gluster-block multipath device not being fully cleaned up after pod removal
Last Updated: 2023-09-14 04:11:29 UTC

Internal Links: 1511010

Description Xiubo Li 2018-06-04 07:12:21 UTC
Description of problem:

In our Gluster and OpenShift production environment:

Bring up a gluster-block-backed pod and you will see the multipath device with three paths:

[cloud-user@osenode4 ~]$ sudo multipath -ll
mpathb (360014050b2a1e10336a4600ae4c2eec5) dm-32 LIO-ORG ,TCMU device
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:0 sda 8:0  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 6:0:0:0 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 7:0:0:0 sdc 8:32 active ready running

When the pod is moved / scaled down / deleted, the multipath device remains:

[cloud-user@osenode4 ~]$ sudo multipath -ll
mpathb (360014050b2a1e10336a4600ae4c2eec5) dm-32
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw

Attempting to flush the map manually fails because it is still in use:

[cloud-user@osenode4 ~]$ sudo multipath -f mpathb
Nov 08 09:04:15 | mpathb: map in use
Nov 08 09:04:15 | failed to remove multipath map mpathb



Version-Release number of selected component (if applicable):


How reproducible locally:

The gluster-block HA count is 3 (three iSCSI paths per multipath device).

Currently I have two client-side environments; the relevant RPM package versions are:

CLIENT_1, which is using the latest packages:

systemd-sysv-219-57.el7.x86_64
systemd-219-57.el7.x86_64
systemd-libs-219-57.el7.x86_64
device-mapper-multipath-0.4.9-119.el7.x86_64
device-mapper-multipath-libs-0.4.9-119.el7.x86_64
iscsi-initiator-utils-iscsiuio-6.2.0.874-7.el7.x86_64
iscsi-initiator-utils-6.2.0.874-7.el7.x86_64

CLIENT_2:

systemd-sysv-219-42.el7_4.7.x86_64
systemd-libs-219-42.el7_4.7.x86_64
systemd-219-42.el7_4.7.x86_64
systemd-devel-219-42.el7_4.7.x86_64
device-mapper-multipath-0.4.9-119.el7.x86_64
device-mapper-multipath-libs-0.4.9-119.el7.x86_64
iscsi-initiator-utils-6.2.0.874-7.el7.x86_64
iscsi-initiator-utils-iscsiuio-6.2.0.874-7.el7.x86_64

The test is a bash loop like the following:

# while [ 1 ]; do iscsiadm -m discovery -t st -p 192.168.195.135 -l && sleep 2; iscsiadm -m node --logoutall=all && sleep 1; lsblk >> /tmp/a.txt; sleep 1; done
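
For readability, here is the same reproducer written as a script (a sketch; the portal IP 192.168.195.135 and the /tmp/a.txt path are taken from the one-liner above):

#!/bin/bash
# Reproducer sketch: repeatedly log in to and out of the iSCSI targets and
# record which block devices (and any leftover mpath maps) remain afterwards.
while true; do
    # Discover the targets behind the portal and log in to all of them
    iscsiadm -m discovery -t st -p 192.168.195.135 -l && sleep 2
    # Log out of every iSCSI session again
    iscsiadm -m node --logoutall=all && sleep 1
    # Append the current block device list; stale mpath entries show up here
    lsblk >> /tmp/a.txt
    sleep 1
done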

The results are:

CLIENT_1: the bug reproduces very easily (almost every time).
CLIENT_2: tested 300 times and could not reproduce it.

The following are the logs from CLIENT_1:
......
May 30 05:50:19 client multipathd[532]: dm-3: remove map (uevent)
May 30 05:50:19 client multipathd[532]: dm-3: remove map (uevent)
May 30 05:50:19 client multipathd[532]: uevent trigger error
May 30 05:50:19 client multipathd[532]: dm-3: mapname not found for 253:3
May 30 05:50:19 client systemd-udevd[3963]: inotify_add_watch(7, /dev/dm-3, 10) failed: No such file or directory
May 30 05:50:19 client multipathd[532]: mpathu: Entering recovery mode: max_retries=120
May 30 05:50:19 client multipathd[532]: mpathw: Entering recovery mode: max_retries=120
May 30 05:50:19 client multipathd[532]: sdg [8:96]: path removed from map mpathu
May 30 05:50:19 client multipathd[532]: mpathu: Entering recovery mode: max_retries=120
May 30 05:50:19 client multipathd[532]: mpathu: load table [0 2097152 multipath 1 queue_if_no_path 0 0 0]
May 30 05:50:19 client multipathd[532]: mpathu: can't flush
May 30 05:50:19 client multipathd[532]: mpathu: map in use
May 30 05:50:19 client multipathd[532]: sdg: remove path (uevent)
May 30 05:50:19 client multipathd[532]: mpathv: removed map after removing all paths
May 30 05:50:19 client multipathd[532]: mpathv: stop event checker thread (140678910977792)
May 30 05:50:19 client multipathd[532]: mpathv: devmap removed
May 30 05:50:19 client multipathd[532]: sde: remove path (uevent)
May 30 05:50:19 client multipathd[532]: sdc [8:32]: path removed from map mpathw
May 30 05:50:19 client multipathd[532]: mpathw: Entering recovery mode: max_retries=120
May 30 05:50:19 client multipathd[532]: mpathw: load table [0 2 multipath 1 queue_if_no_path 0 0 0]
May 30 05:50:19 client kernel: device-mapper: multipath: Failing path 8:96.
May 30 05:50:19 client multipathd[532]: mpathw: can't flush
May 30 05:50:19 client multipathd[532]: mpathw: map in use
May 30 05:50:19 client multipathd[532]: sdc: remove path (uevent)
May 30 05:50:19 client multipathd[532]: sdd [8:48]: path removed from map mpathv
May 30 05:50:19 client multipathd[532]: mpathv: load table [0 2 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:64 1]
May 30 05:50:19 client kernel: device-mapper: multipath: Failing path 8:32.
......
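
The inotify_add_watch() failure from systemd-udevd in the log above points to udev events racing with multipathd's attempt to flush the map. One way to watch that race while the reproducer loop runs (a diagnostic sketch on my part, not part of the original test) is to monitor block-subsystem uevents in a second terminal:

# Show both raw kernel uevents and the events after udev rule processing for
# the block subsystem; the add/change/remove storm for the sd* and dm-* devices
# during login/logout is visible here.
udevadm monitor --kernel --udev --subsystem-match=block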

==============================

Then, after upgrading the systemd package on CLIENT_2 to the latest version, the bug reproduced very easily again, just as on CLIENT_1.


For CLIENT_2, the /tmp/a.txt file looks something like:

......
  17 NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  18 sda               8:0    0   35G  0 disk
  19 ├─sda1            8:1    0    1G  0 part /boot
  20 └─sda2            8:2    0   34G  0 part
  21   ├─centos-root 253:0    0   32G  0 lvm  /
  22   └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
  23 sdb               8:16   0   20G  0 disk /data
  24 sr0              11:0    1  792M  0 rom
......

For CLIENT_1, the /tmp/a.txt file looks something like:

  1 NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
  2 sda               8:0    0   20G  0 disk
  3 ├─sda1            8:1    0    1G  0 part  /boot
  4 └─sda2            8:2    0   19G  0 part
  5   ├─centos-root 253:0    0   17G  0 lvm   /
  6   └─centos-swap 253:1    0    2G  0 lvm   [SWAP]
  7 sr0              11:0    1  792M  0 rom
  8 mpathx          253:8    0    1K  0 mpath

......

And

[root@client ~]# dmsetup info mpathx
Name:              mpathy
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      1
Major, minor:      253, 8
Number of targets: 1
UUID: mpath-8faeaa7fc6f4f20a036001405b6c0a70b

[root@client ~]# fuser -vam /dev/mapper/mpathx
                     USER        PID ACCESS COMMAND
/dev/dm-8:           root       4989 f.... systemd-udevd
[root@client ~]# 
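
A stopgap that seems to help here (a sketch based on the diagnosis above, not a confirmed fix from this report) is to let systemd-udevd drain its event queue and then retry the flush, since udevd normally holds the dm node open only while its rules are running:

# Workaround sketch: wait for pending udev events to be processed, then retry
# flushing the stale map a few times (mpathx is the leftover map shown above).
udevadm settle --timeout=30
for i in 1 2 3; do
    multipath -f mpathx && break
    sleep 2
done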


Actual results:


The multipath device still exists after the iSCSI devices are logged out.

Expected results:

The multipath device is cleaned up after the iSCSI devices are logged out.


Additional info:

Comment 2 Michal Sekletar 2018-08-27 12:24:34 UTC
Sorry, but so far I have been unable to reproduce the issue. I am still working on a local reproducer and will keep you posted about any updates.

Comment 19 Chris Williams 2020-11-11 21:56:14 UTC
Red Hat Enterprise Linux 7 shipped its final minor release on September 29th, 2020. 7.9 was the last minor release scheduled for RHEL 7.
From initial triage it does not appear that the remaining Bugzillas meet the inclusion criteria for the Maintenance Support 2 Phase, so they will now be closed.

From the RHEL life cycle page:
https://access.redhat.com/support/policy/updates/errata#Maintenance_Support_2_Phase
"During Maintenance Support 2 Phase for Red Hat Enterprise Linux version 7,Red Hat defined Critical and Important impact Security Advisories (RHSAs) and selected (at Red Hat discretion) Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available."

If this BZ was closed in error and meets the above criteria, please re-open it, set the flag for 7.9.z, provide suitable business and technical justification, and follow the process for Accelerated Fixes:
https://source.redhat.com/groups/public/pnt-cxno/pnt_customer_experience_and_operations_wiki/support_delivery_accelerated_fix_release_handbook  

Feature Requests can be re-opened and moved to RHEL 8 if the desired functionality is not already present in the product.

Please reach out to the applicable Product Experience Engineer[0] if you have any questions or concerns.  

[0] https://bugzilla.redhat.com/page.cgi?id=agile_component_mapping.html&product=Red+Hat+Enterprise+Linux+7

