Bug 1511010 - [GSS] [Tracking] gluster-block multipath device not being fully cleaned up after pod removal
Summary: [GSS] [Tracking] gluster-block multipath device not being fully cleaned up after pod removal
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-block
Version: cns-3.6
Hardware: All
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Prasanna Kumar Kalever
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1585581
Blocks: 1573420 1622458
 
Reported: 2017-11-08 14:11 UTC by Matthew Robson
Modified: 2023-09-14 04:11 UTC
CC List: 15 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-03 06:05:04 UTC
Embargoed:


Attachments
log from pod shutdown with block PV (170.98 KB, text/plain)
2017-11-08 14:11 UTC, Matthew Robson


Links
System ID: Red Hat Bugzilla 1585581
Priority: high
Status: CLOSED
Summary: multipath device couldn't be fully cleaned up after the iscsi devices been logged out due to systemd-udevd process still...
Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 1585581

Description Matthew Robson 2017-11-08 14:11:27 UTC
Created attachment 1349460 [details]
log from pod shutdown with block PV

Description of problem:

Bring up a gluster-block backed pod and you see the multipath device with three paths:

[cloud-user@osenode4 ~]$ sudo multipath -ll
mpathb (360014050b2a1e10336a4600ae4c2eec5) dm-32 LIO-ORG ,TCMU device
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:0 sda 8:0  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 6:0:0:0 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 7:0:0:0 sdc 8:32 active ready running
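
For reference, the three paths shown above can be cross-checked against the iSCSI sessions and SCSI devices on the node. A minimal sketch (session IDs and device names will vary per environment):

  # one iSCSI session per gluster-block target portal backs each path
  sudo iscsiadm -m session

  # each sd device should show the mpathb map stacked on top of it
  lsblk /dev/sda /dev/sdb /dev/sdc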

When the pod is moved, scaled down, or deleted, the multipath device remains:

[cloud-user@osenode4 ~]$ sudo multipath -ll
mpathb (360014050b2a1e10336a4600ae4c2eec5) dm-32
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw

After the fact, the first flush fails because the map is still in use; a second flush removes it:

[cloud-user@osenode4 ~]$ sudo multipath -f mpathb
Nov 08 09:04:15 | mpathb: map in use
Nov 08 09:04:15 | failed to remove multipath map mpathb

[cloud-user@osenode4 ~]$ sudo multipath -f mpathb
[cloud-user@osenode4 ~]$ sudo multipath -ll
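
The "map in use" failure on the first flush above indicates something still held the device-mapper map open at that point. A sketch of how to check before retrying (assuming the map is still named mpathb, as in this node's output):

  # Open count > 0 means a holder still references the map
  sudo dmsetup info mpathb

  # list any processes with the dm device open (dm-32 in this case)
  sudo fuser -v /dev/dm-32

  # retry the flush once the open count drops to 0
  sudo multipath -f mpathb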

Log Attached.

Shutdown At: Nov 8 08:58:00

First Multipath -f At (failed): Nov 8 09:04:15

Second Multipath -f At (successful): Nov 8 09:04:38

Version-Release number of selected component (if applicable):

CNS 3.6

How reproducible:

Always

Steps to Reproduce:
1. Deploy pod with block PV
2. Scale down to 0
3. Check multipath on the node (see the sketch below)
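
A condensed version of the above, assuming a DeploymentConfig named block-app that mounts a gluster-block backed PV (the name and resource type are only illustrative):

  # find the node currently running the pod
  oc get pods -o wide

  # scale the application down so the block PV should be unmapped
  oc scale dc/block-app --replicas=0

  # on that node: the map should be gone, but an empty mpath device (no paths) is left behind
  sudo multipath -ll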

Actual results:

Leftover multipath device remains on the node.

Expected results:

The multipath device is fully cleaned up when the pod is removed.

Additional info:

Comment 40 Amar Tumballi 2018-11-19 08:45:08 UTC
How do we go about resolving this bug? It has been open for more than a year. gluster-block in general has become much more stable across OCS releases, but the customer issue is still open, and we need a decision on how to proceed.

As mentioned above, this is blocked on a systemd bug that is not targeted for any time in the near future, so we should consider letting the customer know and taking appropriate action (CLOSED/WONTFIX?) on this bug.

Comment 46 Prasanna Kumar Kalever 2020-12-03 06:05:04 UTC
We are not seeing this bug these days. Closing it; feel free to reopen if you come across it again.

Comment 47 Red Hat Bugzilla 2023-09-14 04:11:29 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

